Results 1 - 12 of 12
1.
Stat Biosci ; 15(2): 384-396, 2023.
Article in English | MEDLINE | ID: covidwho-20235919

ABSTRACT

In 2020, the world was struck by the extremely deadly COVID-19 pandemic. In the first year of the pandemic, more than 83 million people were infected with COVID-19 and more than 1.9 million died from the virus. From the outset, the medical community worked to confront the pandemic, and many clinical trials have been, and continue to be, conducted in search of a safe and effective treatment for the virus. In this paper, we review the 96 clinical trials, registered in the ClinicalTrials.gov database, that had been completed by the end of the first year of the pandemic. Although the clinical trials showed significant heterogeneity in their main methodological features (enrollment, duration, allocation, intervention model, and masking), they appeared to be conducted on an appropriate methodological basis.

2.
Trials ; 23(1): 458, 2022 Jun 02.
Article in English | MEDLINE | ID: covidwho-2318220

ABSTRACT

BACKGROUND: At the 2015 REWARD/EQUATOR conference on research waste, the late Doug Altman revealed that his only regret about his 1994 BMJ paper 'The scandal of poor medical research' was that he used the word 'poor' rather than 'bad'. But how much research is bad? And what would improve things? MAIN TEXT: We focus on randomised trials and look at scale, participants and cost. We randomly selected up to two quantitative intervention reviews published by all clinical Cochrane Review Groups between May 2020 and April 2021. Data including the risk of bias, number of participants, intervention type and country were extracted for all trials included in selected reviews. Trials at high risk of bias were classed as bad. The cost of high risk of bias trials was estimated using published estimates of trial cost per participant. We identified 96 reviews authored by 546 reviewers from 49 clinical Cochrane Review Groups that included 1659 trials done in 84 countries. Of the 1640 trials providing risk of bias information, 1013 (62%) were high risk of bias (bad), 494 (30%) unclear and 133 (8%) low risk of bias. Bad trials were spread across all clinical areas and all countries. Well over 220,000 participants (or 56% of all participants) were in bad trials. The low estimate of the cost of bad trials was £726 million; our high estimate was over £8 billion. We have five recommendations: trials should be neither funded (1) nor given ethical approval (2) unless they have a statistician and methodologist; trialists should use a risk of bias tool at design (3); more statisticians and methodologists should be trained and supported (4); there should be more funding for applied methodology research and infrastructure (5). CONCLUSIONS: Most randomised trials are bad and most trial participants will be in one. The research community has tolerated this for decades. This has to stop: we need to put rigour and methodology where they belong - at the centre of our science.
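The cost estimate described above is, at its core, participants in high risk of bias trials multiplied by a published per-participant cost. The sketch below illustrates that arithmetic; the unit costs are hypothetical round figures chosen only to reproduce the order of magnitude reported in the abstract, not the authors' actual published inputs.

```python
# Sketch of the cost-of-bad-trials estimate: participants in high risk of
# bias trials times an assumed cost per participant. The per-participant
# costs below are hypothetical round figures, not the published estimates
# the authors used.
participants_in_bad_trials = 220_000  # "well over 220,000" per the abstract

low_cost_per_participant = 3_300    # hypothetical low per-participant cost (GBP)
high_cost_per_participant = 36_500  # hypothetical high per-participant cost (GBP)

low_estimate = participants_in_bad_trials * low_cost_per_participant
high_estimate = participants_in_bad_trials * high_cost_per_participant

print(f"low:  £{low_estimate:,}")   # on the order of £0.7 billion
print(f"high: £{high_estimate:,}")  # on the order of £8 billion
```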


Subject(s)
Biomedical Research , Research Personnel , Emotions , Humans , Male , Research Design , Reward
3.
Syst Rev ; 12(1): 55, 2023 03 27.
Article in English | MEDLINE | ID: covidwho-2257817

ABSTRACT

In this letter, we briefly describe how we selected and implemented the quality criteria checklist (QCC) as a critical appraisal tool in rapid systematic reviews conducted to inform public health advice, guidance and policy during the COVID-19 pandemic. As these rapid reviews usually included a range of study designs, it was key to identify a single tool that would allow for reliable critical appraisal across most experimental and observational study designs and be applicable to a range of topics. After carefully considering a number of existing tools, the QCC was selected as it had good interrater agreement between three reviewers (Fleiss kappa coefficient 0.639) and was found to be easy and fast to apply once reviewers were familiar with the tool. The QCC consists of 10 questions, with sub-questions to specify how it should be applied to a specific study design. Four of these questions are considered critical (on selection bias, group comparability, intervention/exposure assessment and outcome assessment) and the rating of a study (high, moderate or low methodological quality) depends on the responses to these four critical questions. Our results suggest that the QCC is an appropriate critical appraisal tool to assess experimental and observational studies within COVID-19 rapid reviews. This study was done at pace during the COVID-19 pandemic; further reliability analyses should be conducted, and more research is needed to validate the QCC across a range of public health topics.
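The interrater agreement figure quoted above (Fleiss kappa 0.639 across three reviewers) comes from the standard Fleiss kappa formula over a subjects-by-categories count matrix. The sketch below implements that formula on made-up ratings, not the authors' data.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a [subjects][categories] matrix of rating counts.

    Each row gives, for one subject (here: one study being appraised),
    how many raters assigned it to each category. Every row must sum to
    the same number of raters.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])

    # Per-subject agreement: fraction of rater pairs that agree.
    p_subject = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_subject) / n_subjects

    # Chance agreement from the marginal category proportions.
    total = n_subjects * n_raters
    p_cat = [sum(row[j] for row in counts) / total
             for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_cat)

    return (p_bar - p_e) / (1 - p_e)


# Hypothetical example: 3 raters sort 4 studies into 2 quality bins.
ratings = [[3, 0], [0, 3], [3, 0], [2, 1]]
print(round(fleiss_kappa(ratings), 3))  # 0.625
```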


Subject(s)
COVID-19 , Humans , Reproducibility of Results , Pandemics , Checklist , Public Health
4.
R Soc Open Sci ; 10(1): 201543, 2023 Jan.
Article in English | MEDLINE | ID: covidwho-2245314

ABSTRACT

There have been reports of poor-quality research during the COVID-19 pandemic. This registered report assessed design characteristics of registered clinical trials for COVID-19 compared to non-COVID-19 trials to empirically explore the design of clinical research during a pandemic and how it compares to research conducted in non-pandemic times. We did a retrospective cohort study with a 1:1 ratio of interventional COVID-19 registrations to non-COVID-19 registrations, with four trial design outcomes: use of control arm, randomization, blinding and prospective registration. Logistic regression was used to estimate the odds ratio for trials investigating COVID-19 versus those that did not, and to estimate direct and total effects of investigating COVID-19 for each outcome. The primary analysis showed a positive direct and total effect of COVID-19 on the use of control arms and randomization. It showed a negative direct effect of COVID-19 on blinding but no evidence of a total effect. There was no evidence of an effect on prospective registration. Taken together with secondary and sensitivity analyses, our findings are inconclusive but point towards a higher prevalence of key design characteristics in COVID-19 trials versus controls. The findings do not support much existing COVID-19 research quality literature, which generally suggests that COVID-19 led to a reduction in quality. Limitations included some data quality issues, minor deviations from the pre-registered plan, and the fact that trial registrations were analysed, which may not accurately reflect study design and conduct. Following in-principle acceptance, the approved stage 1 version of this manuscript was pre-registered on the Open Science Framework at https://doi.org/10.17605/OSF.IO/5YAEB. This pre-registration was performed prior to data analysis.

5.
IEEE Transactions on Instrumentation and Measurement ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-1909267

ABSTRACT

Coronavirus disease 2019 (COVID-19) has led to a global pandemic, infecting 224 million people and causing 4.6 million deaths. Nearly 80 Artificial Intelligence (AI) articles have been published on COVID-19 diagnosis. The first systematic review on the Deep Learning (DL)-based paradigm for COVID-19 diagnosis was recently published by Suri et al. [IEEE J Biomed Health Inform. 2021]. The above study used AtheroPoint's "AP(ai)Bias 1.0" with 10 AI attributes in the DL framework. The proposed study uses "AP(ai)Bias 2.0" as part of three quantitative paradigms for Risk-of-Bias quantification, using the best 40 dedicated Hybrid DL (HDL) studies and 39 AI attributes. In the first method, the radial-bias map (RBM) was computed for each AI study, followed by the computation of the bias value. In the second method, the regional-bias area (RBA) was computed as the area difference between the best and the worst performing AI attributes. In the third method, the ranking-bias score (RBS) was computed, where AI-based cumulative scores were computed for all 40 studies. These studies were ranked, and cutoffs were determined, categorizing the HDL studies into three bins: low, moderate, and high. Using a Venn diagram, these three quantitative methods were benchmarked against two qualitative non-randomized-based AI trial methods (ROBINS-I and PROBAST). Using the analytically derived moderate-high and low-moderate cutoffs of 2.9 and 3.6, respectively, we observed that 40%, 27.5%, 17.5%, 10%, and 20% of studies were low-bias for RBM, RBA, RBS, ROBINS-I, and PROBAST, respectively. We present an eight-point recommendation for AP(ai)Bias 2.0 minimization.
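The three-bin categorization described above reduces to thresholding a cumulative score at the two reported cutoffs (2.9 and 3.6). The sketch below illustrates that binning; the direction of the scale (higher cumulative score meaning lower bias) is an assumption for illustration, as the paper defines the actual scoring.

```python
def bin_rbs(score, low_moderate=3.6, moderate_high=2.9):
    """Bin a ranking-bias score (RBS) into low/moderate/high bias.

    Uses the cutoffs reported in the abstract (2.9 and 3.6). The
    direction -- higher cumulative score meaning lower bias -- is an
    assumption for illustration; the paper defines the actual scale.
    """
    if score >= low_moderate:
        return "low"
    if score >= moderate_high:
        return "moderate"
    return "high"


for s in (4.1, 3.2, 2.5):
    print(s, bin_rbs(s))
```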

6.
Int J Infect Dis ; 122: 72-80, 2022 Sep.
Article in English | MEDLINE | ID: covidwho-1899778

ABSTRACT

OBJECTIVES: This study aimed to describe the prevalence of risks of bias in randomized trials of therapeutic interventions for COVID-19. METHODS: Systematic review and risk of bias assessment, performed by two independent reviewers, of a random sample of 40 randomized trials of therapeutic interventions for moderate-severe COVID-19. We used the RoB 2.0 tool to assess the risk of bias, which evaluates bias under five domains as well as providing an overall assessment of each trial as high or low risk of bias. RESULTS: Of the 40 included trials, 19 (47%) were at high risk of bias, and this was particularly frequent in trials from low-middle income countries (11/14, 79%). Potential deviations from intended interventions (i.e., control participants accessing experimental treatments) were considered a potential source of bias in some studies (14, 35%), as was the risk due to selective reporting of results (6, 15%). The randomization process was considered at low risk of bias in most studies (34, 95%), as were missing data (36, 90%) and measurement of the outcome (35, 87%). CONCLUSION: Many randomized trials evaluating COVID-19 interventions are at risk of bias, particularly those conducted in low-middle income countries. Biases are mostly due to deviations from intended interventions and partly due to the selection of reported results. The use of a placebo control and a publicly available protocol can mitigate many of these risks.
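RoB 2.0, used above, assigns a judgement per domain and then derives an overall judgement. The sketch below is a simplified version of the published mapping (overall high if any domain is high, low only if every domain is low); the threshold for escalating multiple "some concerns" judgements, and the possibility of reviewer override, are assumptions of this sketch rather than exact tool rules.

```python
def rob2_overall(domains):
    """Simplified overall RoB 2.0 judgement from five domain judgements.

    Follows the broad shape of the published algorithm: 'high' if any
    domain is high (or several raise some concerns), 'low' only if every
    domain is low, 'some concerns' otherwise. The '>= 3' escalation rule
    is an assumption; the real tool also allows reviewer judgement to
    override this mapping.
    """
    if "high" in domains or domains.count("some concerns") >= 3:
        return "high"
    if all(d == "low" for d in domains):
        return "low"
    return "some concerns"


print(rob2_overall(["low"] * 5))                                    # low
print(rob2_overall(["low", "some concerns", "low", "low", "low"]))  # some concerns
print(rob2_overall(["low", "high", "low", "low", "low"]))           # high
```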


Subject(s)
COVID-19 , Bias , COVID-19/epidemiology , Humans , Randomized Controlled Trials as Topic , Research Design
7.
Front Public Health ; 9: 680967, 2021.
Article in English | MEDLINE | ID: covidwho-1771108

ABSTRACT

Objective: The risk prediction model is an effective tool for risk stratification and is expected to play an important role in the early detection and prevention of esophageal cancer. This study sought to summarize the available evidence on esophageal cancer risk prediction models and provide references for their development, validation, and application. Methods: We searched PubMed, EMBASE, and Cochrane Library databases for original articles published in English up to October 22, 2021. Studies that developed or validated a risk prediction model of esophageal cancer and its precancerous lesions were included. Two reviewers independently extracted study characteristics including predictors, model performance and methodology, and assessed risk of bias and applicability with PROBAST (Prediction model Risk Of Bias Assessment Tool). Results: A total of 20 studies including 30 original models were identified. The median area under the receiver operating characteristic curve of the risk prediction models was 0.78, ranging from 0.68 to 0.94. Age, smoking, body mass index, sex, upper gastrointestinal symptoms, and family history were the most commonly included predictors. None of the models were assessed as low risk of bias based on PROBAST. The major methodological deficiencies were inappropriate data sources, inconsistent definitions of predictors and outcomes, and an insufficient number of participants with the outcome. Conclusions: This study systematically reviewed available evidence on risk prediction models for esophageal cancer in general populations. The findings indicate a high risk of bias due to several methodological pitfalls in model development and validation, which limit their application in practice.


Subject(s)
Esophageal Neoplasms , Humans
9.
J Clin Epidemiol ; 139: 68-79, 2021 11.
Article in English | MEDLINE | ID: covidwho-1466592

ABSTRACT

OBJECTIVE: To describe the characteristics of Covid-19 randomized clinical trials (RCTs) and examine the association between trial characteristics and the likelihood of finding a significant effect. STUDY DESIGN: We conducted a systematic review to identify RCTs (up to October 21, 2020) evaluating drugs or blood products to treat or prevent Covid-19. We extracted trial characteristics (number of centers, funding sources, and sample size) and assessed risk of bias (RoB) using the Cochrane RoB 2.0 tool. We performed logistic regressions to evaluate the association between RoB due to randomization, single- vs. multicenter status, funding source, and sample size, and finding a statistically significant effect. RESULTS: We included 91 RCTs (n = 46,802); 40 (44%) were single-center, 23 (25.3%) enrolled <50 patients, 28 (30.8%) received industry funding, and 75 (82.4%) had high or probably high RoB. Thirty-eight trials (41.8%) reported a statistically significant effect. RoB due to randomization and being a single-center trial were associated with increased odds of finding a statistically significant effect. CONCLUSIONS: There is high variability in RoB among Covid-19 trials. Researchers, funders, and knowledge-users should be cognizant of the impact of RoB due to randomization and single-center trial status in designing, evaluating, and interpreting the results of RCTs. REGISTRATION: CRD42020192095.
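The association between a trial characteristic (e.g., single-center status) and finding a significant effect can be summarized as an odds ratio from a 2x2 table, with a Wald confidence interval on the log scale. The counts in the sketch below are made up for illustration, not taken from the review.

```python
import math


def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and ~95% Wald CI from a 2x2 table:

                        significant   not significant
        single-center        a              b
        multicenter          c              d
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi


# Hypothetical counts, not the review's data.
or_, lo, hi = odds_ratio_ci(10, 10, 5, 20)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```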


Subject(s)
COVID-19/prevention & control , Randomized Controlled Trials as Topic/methods , Research Design/standards , COVID-19/epidemiology , Epidemiologic Studies , Humans
10.
BMC Med Res Methodol ; 21(1): 175, 2021 08 21.
Article in English | MEDLINE | ID: covidwho-1443791

ABSTRACT

BACKGROUND: Randomized controlled trials (RCT) are considered the ideal design for evaluating the efficacy of interventions. However, conducting a successful RCT poses technological and logistical challenges. Defects in randomization processes (e.g., inadequate allocation sequence concealment) and flawed masking could bias an RCT's findings. Moreover, investigators need to address other logistics common to all study designs, such as study invitations, eligibility screening, consenting procedures, and data confidentiality protocols. Research Electronic Data Capture (REDCap) is a secure, browser-based web application widely used by researchers for survey data collection. REDCap offers unique features that can be used to conduct rigorous RCTs. METHODS: In September and November 2020, we conducted a parallel group RCT among Indiana University Bloomington (IUB) undergraduate students to understand whether receiving the results of a SARS-CoV-2 antibody test changed the students' self-reported protective behavior against coronavirus disease 2019 (COVID-19). In the current report, we discuss how we used REDCap to conduct the different components of this RCT. We further share our REDCap project XML file and instructional videos that investigators can use when designing and conducting their RCTs. RESULTS: We report on the different features that REDCap offers to complete various parts of a large RCT, including sending study invitations and recruitment, eligibility screening, consenting procedures, lab visit appointments and reminders, data collection and confidentiality, randomization, blinding of treatment arm assignment, returning test results, and follow-up surveys. CONCLUSIONS: REDCap offers powerful tools for longitudinal data collection and the conduct of rigorous and successful RCTs. Investigators can make use of this electronic data capturing system to successfully complete their RCTs.
TRIAL REGISTRATION: The RCT was prospectively (before completing data collection) registered at ClinicalTrials.gov; registration number: NCT04620798 , date of registration: November 9, 2020.


Subject(s)
COVID-19 , Research Design , Electronics , Humans , Randomized Controlled Trials as Topic , SARS-CoV-2 , Surveys and Questionnaires
11.
J Eval Clin Pract ; 27(5): 1123-1133, 2021 10.
Article in English | MEDLINE | ID: covidwho-1218146

ABSTRACT

RATIONALE, AIMS, AND OBJECTIVES: COVID-19 has caused an ongoing public health crisis. Many systematic reviews and meta-analyses have been performed to synthesize evidence for better understanding this new disease. However, some concerns have been raised about rapid COVID-19 research. This meta-epidemiological study aims to methodologically assess the current systematic reviews and meta-analyses on COVID-19. METHODS: We searched various databases for systematic reviews with meta-analyses published between 1 January 2020 and 31 October 2020. We extracted their basic characteristics, data analyses, evidence appraisal, and assessment of publication bias and heterogeneity. RESULTS: We identified 295 systematic reviews on COVID-19. The median time from submission to acceptance was 33 days. Among these systematic reviews, 73.9% evaluated clinical manifestations or comorbidities of COVID-19. Stata was the most used software programme (43.39%). The odds ratio was the most used effect measure (34.24%). Moreover, 28.14% of the systematic reviews did not present evidence appraisal. Among those reporting risk of bias results, 14.64% of studies had a high risk of bias. Egger's test was the most used method for assessing publication bias (38.31%), while 38.66% of the systematic reviews did not assess publication bias. The I² statistic was widely used for assessing heterogeneity (92.20%); many meta-analyses had high values of I². Among the meta-analyses using the random-effects model, 75.82% did not report the methods for model implementation; among those that did, the DerSimonian-Laird method was the most commonly used. CONCLUSIONS: The current systematic reviews and meta-analyses on COVID-19 might suffer from low transparency, high heterogeneity, and suboptimal statistical methods. It is recommended that future systematic reviews on COVID-19 strictly follow well-developed guidelines. Sensitivity analyses may be performed to examine how the synthesized evidence might depend on different methods for appraising evidence, assessing publication bias, and implementing meta-analysis models.
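The DerSimonian-Laird random-effects model and the I² heterogeneity statistic flagged above both follow from study effect estimates and their within-study variances. The sketch below implements the standard formulas (Cochran's Q, the method-of-moments tau², and inverse-variance pooling) on made-up numbers.

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling plus tau² and I².

    effects:   per-study effect estimates (e.g. log odds ratios)
    variances: their within-study variances
    """
    w = [1 / v for v in variances]  # fixed-effect (inverse-variance) weights
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

    # Cochran's Q and the I² heterogeneity statistic.
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Method-of-moments between-study variance (tau²).
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights and pooled estimate.
    w_re = [1 / (v + tau2) for v in variances]
    y_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return y_re, tau2, i2


# Made-up effects and variances for three studies.
pooled, tau2, i2 = dersimonian_laird([0.2, 0.5, 0.8], [0.04, 0.04, 0.04])
print(round(pooled, 3), round(tau2, 3), round(i2, 1))  # 0.5 0.05 55.6
```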


Subject(s)
COVID-19 , Epidemiologic Studies , Humans , Publication Bias , SARS-CoV-2 , Systematic Reviews as Topic
12.
Soc Psychiatry Psychiatr Epidemiol ; 56(7): 1147-1160, 2021 Jul.
Article in English | MEDLINE | ID: covidwho-1188080

ABSTRACT

PURPOSE: To assess the quality of the research on how employment conditions and psychosocial workplace exposures impact the mental health of young workers, and to summarize the available evidence. METHODS: We undertook a systematic search of three databases using a tiered search strategy. Studies were included if they: (a) assessed employment conditions such as working hours, precarious employment, contract type, insecurity, and flexible work, or psychosocial workplace exposures such as violence, harassment and bullying, social support, job demand and control, effort-reward imbalance, and organizational justice; (b) included a validated mental health measure; and (c) presented results specific to young people aged ≤ 30 years or were stratified by age group to provide an estimate for young people aged ≤ 30 years. The quality of included studies was assessed using the Risk of Bias in Non-randomized Studies of Exposures (ROBINS-E) tool. RESULTS: Nine studies were included in the review. Four were related to employment conditions, capturing contract type and working hours. Five studies captured concepts relevant to psychosocial workplace exposures including workplace sexual harassment, psychosocial job quality, work stressors, and job control. The quality of the included studies was generally low, with six of the nine at serious risk of bias. Three studies at moderate risk of bias were included in the qualitative synthesis, and their results showed that contemporaneous exposure to sexual harassment and poor psychosocial job quality was associated with poorer mental health outcomes among young workers. Longitudinal evidence showed that exposure to low job control was associated with incident depression diagnosis among young workers. CONCLUSIONS: The findings of this review illustrate that even the better studies are at moderate risk of bias. Addressing issues related to confounding, selection of participants, measurement of exposures and outcomes, and missing data will improve the quality of future research in this area and lead to a clearer understanding of how employment conditions and psychosocial workplace exposures impact the mental health of young people. Generating high-quality evidence is particularly critical given the disproportionate impact of COVID-19 on young people's employment. In preparing for a post-pandemic world where poor-quality employment conditions and psychosocial workplace exposures may become more prevalent, rigorous research must exist to inform policy to protect the mental health of young workers.


Subject(s)
Employment , Mental Health , Workplace , Adult , Humans , Organizational Culture , Social Justice , Young Adult